RS2G: Data-Driven Scene-Graph Extraction and Embedding for Robust Autonomous Perception and Scenario Understanding
Human drivers naturally reason about interactions between road users to
understand and safely navigate through traffic. Thus, developing autonomous
vehicles necessitates the ability to mimic such knowledge and model
interactions between road users to understand and navigate unpredictable,
dynamic environments. However, since real-world scenarios often differ from
training datasets, effectively modeling the behavior of various road users in
an environment remains a significant research challenge. Addressing it requires models that generalize to a broad range of domains and explicitly
model interactions between road users and the environment to improve scenario
understanding. Graph learning methods address this problem by modeling
interactions using graph representations of scenarios. However, existing
methods cannot effectively transfer knowledge gained from the training domain
to real-world scenarios. This constraint is caused by the domain-specific rules
used for graph extraction that can vary in effectiveness across domains,
limiting generalization ability. To address these limitations, we propose
RoadScene2Graph (RS2G): a data-driven graph extraction and modeling approach
that learns to extract the best graph representation of a road scene for
solving autonomous scene understanding tasks. We show that RS2G enables better
performance at subjective risk assessment than rule-based graph extraction
methods and deep-learning-based models. RS2G also improves generalization and
Sim2Real transfer learning, i.e., the ability to transfer knowledge
gained from simulation datasets to unseen real-world scenarios. We also present
ablation studies showing how RS2G produces a more useful graph representation
for downstream classifiers. Finally, we show how RS2G identifies the relative importance of rule-based graph edges and enables intelligent graph-sparsity tuning.
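To make the data-driven extraction idea concrete, the following is a minimal sketch (ours, not the paper's architecture; the module names, dimensions, and bilinear edge scorer are assumptions) of learning a soft adjacency matrix from per-object features instead of hand-coded rules:

```python
# Minimal sketch of data-driven graph extraction (illustrative, not RS2G's
# actual architecture): embed each road object, then learn pairwise edge
# weights instead of fixing them with domain-specific rules.
import torch
import torch.nn as nn

class LearnedGraphExtractor(nn.Module):
    def __init__(self, obj_dim: int, embed_dim: int = 64):
        super().__init__()
        self.embed = nn.Linear(obj_dim, embed_dim)                # per-object embedding
        self.edge_scorer = nn.Bilinear(embed_dim, embed_dim, 1)   # pairwise edge score

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (num_objects, obj_dim) features, e.g. position, velocity, class
        z = torch.relu(self.embed(objects))                       # (N, embed_dim)
        n, d = z.shape
        zi = z.unsqueeze(1).expand(n, n, d).reshape(-1, d)        # row objects
        zj = z.unsqueeze(0).expand(n, n, d).reshape(-1, d)        # column objects
        scores = self.edge_scorer(zi, zj).view(n, n)
        # A sigmoid yields a soft adjacency matrix; thresholding it gives a
        # natural knob for the graph-sparsity tuning mentioned above.
        return torch.sigmoid(scores)

adjacency = LearnedGraphExtractor(obj_dim=8)(torch.randn(5, 8))   # (5, 5) soft edges
```

Training such an extractor end-to-end with a downstream graph classifier, rather than fixing edges by rule, is what allows the learned representation to adapt across domains.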
CARMA: Context-Aware Runtime Reconfiguration for Energy-Efficient Sensor Fusion
Autonomous systems (AS) adapt and change their behavior in response to unanticipated events; examples include aerial drones, autonomous vehicles, and ground/aquatic robots. AS require a wide array of
sensors, deep-learning models, and powerful hardware platforms to perceive and
operate safely in real time. However, in many contexts, some sensing modalities
negatively impact perception while increasing the system's overall energy
consumption. Since AS are often energy-constrained edge devices,
energy-efficient sensor fusion methods have been proposed. However, existing
methods fail either to adapt to changing scenario conditions or to optimize energy efficiency system-wide. We propose CARMA: a context-aware sensor fusion
approach that uses context to dynamically reconfigure the computation flow on a
Field-Programmable Gate Array (FPGA) at runtime. By clock-gating unused sensors
and model sub-components, CARMA significantly reduces the energy used by a
multi-sensory object detector without compromising performance. We use a
Deep-learning Processor Unit (DPU)-based reconfiguration approach to minimize
the latency of model reconfiguration. We evaluate multiple
context-identification strategies, propose a novel system-wide
energy-performance joint optimization, and evaluate scenario-specific
perception performance. Across challenging real-world sensing contexts, CARMA
outperforms state-of-the-art methods with up to 1.3x speedup and 73% lower
energy consumption.
Comment: Accepted to be published in the 2023 ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED 2023).
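As a rough illustration of the runtime-reconfiguration control flow (a hypothetical sketch; the context table, branch stubs, and configurations below are invented, and the real clock-gating happens in FPGA logic that Python can only emulate by skipping work):

```python
# Hypothetical sketch of context-aware reconfiguration: map an identified
# context to a sensor/sub-model configuration, then gate unused branches.
from dataclasses import dataclass

@dataclass
class Config:
    use_camera: bool
    use_lidar: bool

# Assumed context-to-configuration table (illustrative only).
CONTEXT_TABLE = {
    "clear_day": Config(use_camera=True,  use_lidar=False),
    "night":     Config(use_camera=False, use_lidar=True),
    "fog":       Config(use_camera=True,  use_lidar=True),
}

def camera_branch(frame):   # stand-in for a camera sub-network
    return ("cam_features", frame)

def lidar_branch(cloud):    # stand-in for a lidar sub-network
    return ("lidar_features", cloud)

def fuse(features):         # stand-in for the fusion/detection head
    return features

def detect(frame, cloud, context: str):
    # Unknown contexts fall back to the safest configuration: everything on.
    cfg = CONTEXT_TABLE.get(context, Config(True, True))
    features = []
    if cfg.use_camera:      # on the FPGA this branch would be clock-gated off
        features.append(camera_branch(frame))
    if cfg.use_lidar:
        features.append(lidar_branch(cloud))
    return fuse(features)

print(detect("img", "pts", "clear_day"))  # lidar branch is skipped entirely
```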
Neuroscience Inspired Algorithms for the Predictive Maintenance of Manufacturing Systems
If machine failures can be detected preemptively, then maintenance and repairs can be performed more efficiently, reducing production costs. Many machine-learning techniques for early failure detection using vibration data have been proposed; however, these methods are often power- and data-hungry, susceptible to noise, and require large amounts of data preprocessing.
Moreover, training is usually performed only once, before inference, so these models do not learn and adapt as the machine ages.
Thus, we propose a method of performing online, real-time anomaly detection for predictive maintenance using Hierarchical Temporal Memory (HTM).
Inspired by the human neocortex, HTMs learn and adapt continuously and are robust to noise.
Using the Numenta Anomaly Benchmark, we empirically demonstrate that our approach outperforms state-of-the-art algorithms at preemptively detecting real-world bearing failures and simulated 3D-printer failures. Our approach achieves an average score of 64.71, surpassing state-of-the-art deep-learning (49.38) and statistical (61.06) methods.
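For intuition about how raw HTM prediction errors become anomaly decisions, here is a simplified, generic sketch of the anomaly-likelihood post-processing commonly paired with HTM detectors (the window sizes and Gaussian model are our assumptions, not the authors' implementation):

```python
# Generic sketch of anomaly-likelihood post-processing for streaming raw
# anomaly scores (simplified stand-in, not the paper's implementation):
# model recent scores as Gaussian and report the tail probability of the
# short-term average under that model.
import math
from collections import deque

class AnomalyLikelihood:
    def __init__(self, history: int = 500, short_window: int = 10):
        self.scores = deque(maxlen=history)   # rolling window of raw scores
        self.short_window = short_window

    def update(self, raw_score: float) -> float:
        self.scores.append(raw_score)
        if len(self.scores) < self.short_window + 2:
            return 0.5                        # not enough history yet
        mean = sum(self.scores) / len(self.scores)
        var = sum((s - mean) ** 2 for s in self.scores) / len(self.scores)
        std = max(math.sqrt(var), 1e-6)
        recent = list(self.scores)[-self.short_window:]
        short_mean = sum(recent) / len(recent)
        z = (short_mean - mean) / std
        # One-sided Gaussian tail via the complementary error function:
        # a short-term average far above the historical mean -> likelihood near 1.
        return 1.0 - 0.5 * math.erfc(z / math.sqrt(2))
```

Because the statistics are updated on every sample, this kind of detector keeps adapting online as the machine ages, which is the property the abstract emphasizes.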
EcoFusion: Energy-Aware Adaptive Sensor Fusion for Efficient Autonomous Vehicle Perception
Autonomous vehicles use multiple sensors, large deep-learning models, and
powerful hardware platforms to perceive the environment and navigate safely. In
many contexts, some sensing modalities negatively impact perception while
increasing energy consumption. We propose EcoFusion: an energy-aware sensor
fusion approach that uses context to adapt the fusion method and reduce energy
consumption without affecting perception performance. EcoFusion performs up to
9.5% better at object detection than existing fusion methods with approximately
60% less energy and 58% lower latency on the industry-standard Nvidia Drive PX2
hardware platform. We also propose several context-identification strategies,
implement a joint optimization between energy and performance, and present
scenario-specific results.
Comment: Accepted to be published in the 59th ACM/IEEE Design Automation Conference (DAC 2022).
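A minimal sketch of what such an energy-performance joint optimization can look like (the candidate configurations, accuracy estimates, and energy costs below are invented for illustration and do not come from the paper):

```python
# Illustrative energy/performance joint optimization: each candidate fusion
# configuration has an estimated accuracy and energy cost, and the controller
# selects the one maximizing a weighted trade-off.

# Assumed per-context estimates: config -> (expected_accuracy, energy_joules)
CANDIDATES = {
    "camera_only":  (0.86, 1.0),
    "lidar_only":   (0.84, 1.4),
    "early_fusion": (0.91, 2.6),
    "late_fusion":  (0.93, 3.1),
}

def select_config(alpha: float = 0.1) -> str:
    """Return the configuration maximizing accuracy - alpha * energy.

    alpha trades perception performance against energy; larger values
    favor cheaper configurations.
    """
    return max(CANDIDATES, key=lambda c: CANDIDATES[c][0] - alpha * CANDIDATES[c][1])

print(select_config(alpha=0.01))  # -> "late_fusion" (accuracy-leaning)
print(select_config(alpha=0.3))   # -> "camera_only" (energy-leaning)
```

In practice the accuracy and energy estimates would themselves depend on the identified context, which is what lets an adaptive fusion scheme cut energy without hurting perception.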